93 research outputs found

    Fast Compressed Segmentation Volumes for Scientific Visualization

    Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels and a sequence of operations to reconstruct the brick, which is further compressed using rANS entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set, which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second for both compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method on various data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement.
    Comment: IEEE Vis 202
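    As a rough illustration of the global-frequency-table idea, the Python sketch below splits a label volume into bricks and accumulates one shared symbol-frequency table across all of them; the brick size, the palette-index stream standing in for the paper's operations, and all function names are assumptions, and the iterative refinement scheme and the rANS coder itself are not reproduced.

```python
# Minimal sketch: per-brick encoding with one global frequency table
# (assumed brick size; a palette-index stream stands in for the
# paper's refinement operations).
import numpy as np
from collections import Counter

BRICK = 8  # assumed brick edge length

def encode_bricks(volume):
    """Split a 3D label volume into bricks. Each brick stores its label
    palette plus a stream of palette indices; symbol frequencies are
    accumulated globally so a single entropy-coding table can serve the
    entire data set, enabling parallel (de)compression."""
    global_freq = Counter()
    bricks = []
    sx, sy, sz = volume.shape
    for x in range(0, sx, BRICK):
        for y in range(0, sy, BRICK):
            for z in range(0, sz, BRICK):
                b = volume[x:x+BRICK, y:y+BRICK, z:z+BRICK]
                palette, idx = np.unique(b.ravel(), return_inverse=True)
                global_freq.update(idx.tolist())
                bricks.append((palette, idx))
    return bricks, global_freq  # global_freq would feed an rANS coder

vol = np.random.randint(0, 5, size=(32, 32, 32))  # toy 32^3 label volume
bricks, freq = encode_bricks(vol)
print(len(bricks), "bricks;", sum(freq.values()), "symbols")
```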

    Sampling Projected Spherical Caps in Real Time

    Stochastic shading with area lights requires methods to sample the light sources. For diffuse materials, the best strategy is to sample proportionally to the projected solid angle. Recent work in offline rendering has addressed this problem for spherical light sources, but the solution is unsuitable for a GPU implementation. We present a far more efficient solution. It offers results without noteworthy noise for diffuse surfaces lit by an unoccluded spherical light source while being only two to three times more costly than simple sampling of the solid angle. The core insight of the technique is that a projected spherical cap can be decomposed into, or at least approximated by, cut disks. We present an efficient method to sample cut disks and show how to use it to sample projected spherical caps. In some cases, our method does not sample exactly proportionally to the projected solid angle, but the deviation is provably bounded.
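    To make the notion of a cut disk concrete, here is a minimal Python sketch that samples one uniformly by area using rejection; the paper's contribution is an analytic, GPU-friendly sampling routine, so the rejection loop here only illustrates the target distribution.

```python
# Uniform area sampling of a "cut disk": the unit disk clipped to
# x >= c, via rejection (illustration only; not the paper's method).
import math, random

def sample_cut_disk(c):
    """Return a point uniformly distributed on {(x, y) : x^2 + y^2 <= 1, x >= c}."""
    assert -1.0 <= c < 1.0
    while True:
        r = math.sqrt(random.random())         # uniform point in the unit disk
        phi = 2.0 * math.pi * random.random()
        x, y = r * math.cos(phi), r * math.sin(phi)
        if x >= c:                             # keep only the cut region
            return x, y

print([sample_cut_disk(0.3) for _ in range(3)])
```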

    Void-and-Cluster Sampling of Large Scattered Data and Trajectories

    We propose a data reduction technique for scattered data based on statistical sampling. Our void-and-cluster sampling technique finds a representative subset that is optimally distributed in the spatial domain with respect to the blue noise property. In addition, it can adapt to a given density function, which we use to sample regions of high complexity in the multivariate value domain more densely. Moreover, our sampling technique implicitly defines an ordering on the samples that enables progressive data loading and a continuous level-of-detail representation. We extend our technique to sample time-dependent trajectories, for example, pathlines in a time interval, using an efficient and iterative approach. Furthermore, we introduce a local and continuous error measure to quantify how well a set of samples represents the original dataset. We apply this error measure during sampling to guide the number of samples that are taken. Finally, we use this error measure and other quantities to evaluate the quality, performance, and scalability of our algorithm.
    Comment: To appear in IEEE Transactions on Visualization and Computer Graphics as a special issue from the proceedings of VIS 201
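    A minimal Python sketch of the selection idea follows: greedily pick the point in the largest "void" under a Gaussian energy, which also yields the sample ordering used for progressive loading. The energy kernel, its sigma, and the uniform-density assumption are illustrative; the paper's density adaptation and trajectory handling are omitted.

```python
# Simplified greedy largest-void selection with a Gaussian energy for
# scattered 2D points (illustrative parameters).
import numpy as np

def void_and_cluster(points, n_samples, sigma=0.05):
    energy = np.zeros(len(points))
    order = [np.random.randint(len(points))]   # arbitrary first sample
    for _ in range(n_samples):
        last = order[-1]
        d2 = np.sum((points - points[last]) ** 2, axis=1)
        energy += np.exp(-d2 / (2.0 * sigma * sigma))
        energy[last] = np.inf                  # never pick a point twice
        order.append(int(np.argmin(energy)))   # point in the largest void
    return order[:n_samples]  # the ordering itself gives progressive LOD

pts = np.random.rand(2000, 2)
subset_indices = void_and_cluster(pts, 100)
```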

    Stochastic Volume Rendering of Multi-Phase SPH Data

    In this paper, we present a novel method for the direct volume rendering of large smoothed-particle hydrodynamics (SPH) simulation data without transforming the unstructured data to an intermediate representation. By directly visualizing the unstructured particle data, we avoid long preprocessing times and large storage requirements. This enables the visualization of large, time-dependent, and multivariate data both as a post-process and in situ. To address the computational complexity, we introduce stochastic volume rendering that considers only a subset of particles at each step during ray marching. The sample probabilities for selecting this subset at each step are thereby determined both in a view-dependent manner and based on the spatial complexity of the data. Our stochastic volume rendering enables us to scale continuously from a fast, interactive preview to a more accurate volume rendering at higher cost. Lastly, we discuss the visualization of free-surface and multi-phase flows by including a multi-material model with volumetric and surface shading into the stochastic volume rendering.
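    The core estimator can be sketched in a few lines of Python: evaluate only a random subset of particles and divide by the selection probability so the expectation matches the full sum. The kernel and all names below are illustrative stand-ins, and the view-dependent, complexity-based probabilities of the paper are omitted.

```python
# Hedged sketch of stochastic particle subsampling at one ray-march
# step, reweighted by the selection probability for unbiasedness.
import numpy as np

def kernel(r, h):
    q = np.clip(1.0 - r / h, 0.0, None)  # stand-in smoothing kernel
    return q * q

def stochastic_density(p, particles, h, p_keep=0.25):
    """Unbiased stochastic SPH density estimate at point p."""
    keep = np.random.random(len(particles)) < p_keep
    r = np.linalg.norm(particles[keep] - p, axis=1)
    # Dividing by p_keep makes the expectation equal the full sum.
    return kernel(r, h).sum() / p_keep

particles = np.random.rand(10000, 3)
print(stochastic_density(np.array([0.5, 0.5, 0.5]), particles, h=0.05))
```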

    GPU Cost Estimation for Load Balancing in Parallel Ray Tracing

    Interactive ray tracing has seen enormous progress in recent years. However, advanced rendering techniques requiring many millions of rays per second are still not feasible at interactive speed and are only possible by means of highly parallel ray tracing. When using compute clusters, good load balancing is crucial in order to fully exploit the available computational power and to avoid the overhead introduced by synchronization barriers. In this paper, we present a novel GPU method to compute a cost map: a per-pixel cost estimate of the ray tracing rendering process. We show that the cost map is a powerful tool to improve load balancing in parallel ray tracing, and that it can be used for adaptive task partitioning and enhanced dynamic load balancing. Its effectiveness has been proven in a parallel ray tracer implementation tailored for a cluster of workstations.
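    One way such a cost map can drive adaptive task partitioning is sketched below in Python: prefix sums over per-column costs split the image so each worker receives roughly equal estimated work. The partitioning scheme and names are assumptions; the paper's GPU-side cost estimation is not reproduced.

```python
# Illustrative cost-map-driven partitioning via prefix sums.
import numpy as np

def partition_columns(cost_map, n_workers):
    cum = np.cumsum(cost_map.sum(axis=0))      # cumulative column cost
    total = cum[-1]
    bounds = [0]
    for w in range(1, n_workers):
        # first column where cumulative cost passes w/n of the total
        bounds.append(int(np.searchsorted(cum, total * w / n_workers)))
    bounds.append(cost_map.shape[1])
    return list(zip(bounds[:-1], bounds[1:]))  # [start, end) per worker

cost = np.random.rand(480, 640) ** 2           # fake per-pixel estimate
print(partition_columns(cost, 4))
```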

    Detecting Bias in Monte Carlo Renderers using Welch’s t-test

    When checking the implementation of a new renderer, one usually compares the output to that of a reference implementation. However, such tests require a large number of samples to be reliable, and sometimes they are unable to reveal very subtle differences that are caused by bias but overshadowed by random noise. We propose using Welch’s t-test, a statistical test that reliably finds small bias even at low sample counts. Welch’s t-test is an established method in statistics to determine whether two sample sets have the same underlying mean, based on sample statistics. We adapt it to test whether two renderers converge to the same image, i.e., the same mean per pixel or pixel region. We also present two strategies for visualizing and analyzing the test’s results, assisting us in localizing especially problematic image regions and in detecting biased implementations with high confidence at low sample counts for both the reference and the tested implementation.
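    Since Welch’s t-test is a standard statistical tool, the per-pixel adaptation can be sketched directly with SciPy (ttest_ind with equal_var=False is Welch’s test). The image sizes, sample counts, and injected bias below are made up for the demonstration.

```python
# Per-pixel Welch's t-test between two renderers' sample sets.
import numpy as np
from scipy.stats import ttest_ind

def bias_map(samples_a, samples_b, alpha=0.01):
    """samples_*: shape (n_samples, H, W). Returns a boolean map of
    pixels whose per-pixel means differ significantly."""
    _, p = ttest_ind(samples_a, samples_b, axis=0, equal_var=False)
    return p < alpha

a = np.random.normal(0.5, 0.1, size=(64, 32, 32))  # reference renderer
b = np.random.normal(0.5, 0.1, size=(64, 32, 32))  # tested renderer
b[:, :8, :8] += 0.02                               # subtle injected bias
print("flagged pixel fraction:", bias_map(a, b).mean())
```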

    Prism Parallax Occlusion Mapping with Accurate Silhouette Generation

    Per-pixel displacement mapping algorithms such as [Policarpo et al. 2005; Tatarchuk 2006] have recently become very popular, as they can take advantage of the parallel nature of programmable GPU pipelines and render detailed surfaces at highly interactive rates. These approaches exhibit pleasing visual quality and render motion parallax effects; however, most of them lack correct silhouettes. We perform ray-surface intersection in a volume given by prisms extruded from the input mesh triangles in the direction of the normal. The displaced surface is embedded in the volume of these prisms, bounded by a top and a bottom triangle and three bilinear patches (slabs). [Hirche et al. 2004] propose to triangulate the slabs and split the prisms into three tetrahedra. A consistent triangulation of adjacent prisms ensures that no gaps between tetrahedra exist and that no tetrahedra overlap. Ray marching through tetrahedra is then straightforward, as texture gradients (for marching along the ray) can be computed per tetrahedron.
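    The step all of these methods share is marching a ray through a displacement field until it drops below the surface; the Python sketch below shows that basic march in texture space and deliberately omits the prism/tetrahedra construction that yields the correct silhouettes.

```python
# Minimal heightfield ray march in texture space (the shared core of
# per-pixel displacement mapping; not the prism/tetrahedra scheme).
import numpy as np

def march_heightfield(height, origin, direction, steps=64):
    """origin/direction in texture space (u, v, z); returns hit (u, v) or None."""
    p = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float) / steps
    h, w = height.shape
    for _ in range(steps):
        p += d
        if not (0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0):
            return None                        # ray left the texture domain
        u = min(int(p[0] * w), w - 1)
        v = min(int(p[1] * h), h - 1)
        if p[2] <= height[v, u]:
            return p[0], p[1]                  # intersection found
    return None

heights = np.random.rand(64, 64) * 0.2
print(march_heightfield(heights, (0.1, 0.1, 1.0), (0.6, 0.5, -1.0)))
```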

    Path Guiding with Vertex Triplet Distributions

    Good importance sampling strategies are decisive for the quality and robustness of photorealistic image synthesis with Monte Carlo integration. Path guiding approaches use transport paths sampled by an existing base sampler to build and refine a guiding distribution. This distribution then guides subsequent paths into regions that are otherwise hard to sample. We observe that all terms in the measurement contribution function sampled during path construction depend on at most three consecutive path vertices. We thus propose to build a 9D guiding distribution over vertex triplets that adapts to the full measurement contribution with a 9D Gaussian mixture model (GMM). For incremental path sampling, we query the model for the last two vertices of a path prefix, resulting in a 3D conditional distribution with which we sample the next vertex along the path. To make this approach scalable, we partition the scene with an octree and learn a local GMM for each leaf separately. In a learning phase, we sample paths using the current guiding distribution and collect triplets of path vertices. We resample these triplets online and keep only a fixed-size subset in reservoirs. After each progression, we obtain new GMMs from the triplet samples by an initial hard clustering followed by expectation maximization. Since we model 3D vertex positions, our guiding distribution naturally extends to participating media. In addition, the symmetry in the GMM allows us to query it for paths constructed by a light tracer. Therefore, our method can guide both a path tracer and a light tracer from a jointly learned guiding distribution.
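    The 3D conditional query can be illustrated with the standard Gaussian conditioning formula applied to one 9D component; a full GMM would mix such conditionals with reweighted component probabilities. All parameters in the sketch are toy values.

```python
# Condition one 9D Gaussian (a single GMM component over a vertex
# triplet) on its first six dims (the last two path vertices) to get
# a 3D distribution for the next vertex.
import numpy as np

def condition_gaussian(mu, cov, x_known):
    """Return mean/covariance of x_b | x_a = x_known for a joint Gaussian
    split into a = first len(x_known) dims and b = the rest."""
    k = len(x_known)
    mu_a, mu_b = mu[:k], mu[k:]
    S_aa, S_ab = cov[:k, :k], cov[:k, k:]
    S_ba, S_bb = cov[k:, :k], cov[k:, k:]
    mu_c = mu_b + S_ba @ np.linalg.solve(S_aa, x_known - mu_a)
    S_c = S_bb - S_ba @ np.linalg.solve(S_aa, S_ab)
    return mu_c, S_c

rng = np.random.default_rng(0)
A = rng.standard_normal((9, 9))
cov = A @ A.T + 9.0 * np.eye(9)                # a valid 9D covariance
mu = rng.standard_normal(9)
mu_c, S_c = condition_gaussian(mu, cov, rng.standard_normal(6))
next_vertex = rng.multivariate_normal(mu_c, S_c)  # sample next 3D vertex
```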

    TileTrees

    Texture mapping with atlases suffers from several drawbacks: wasted memory, seams, uniform resolution, and no support for implicit surfaces. Texture mapping in a volume solves most of these issues, but unfortunately it induces an important space and time overhead. To address this problem, we introduce the TileTree: a novel data structure for texture mapping surfaces. TileTrees store square texture tiles in the leaves of an octree surrounding the surface. At rendering time, the surface is projected onto the tiles, and the color is retrieved by a simple 2D texture fetch into a tile map. This avoids the difficulties of global planar parameterizations while still mapping large pieces of surface to regular 2D textures. Our method is simple to implement, does not require long pre-processing time, nor any modification of the textured geometry. It is not limited to triangle meshes. The resulting texture has little distortion and is seamlessly interpolated over smooth surfaces. Our method natively supports adaptive resolution. We show that TileTrees are more compact than other volume approaches, while providing fast access to the data. We also describe an interactive painting application, enabling users to create, edit, and render objects without having to convert between texture representations.
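    A TileTree-style lookup can be sketched as follows: given the octree leaf containing a surface point, project the point along its dominant normal axis onto the leaf's square tile and perform an ordinary 2D fetch. The octree traversal and the paper's tile-map packing are omitted, and all names are illustrative.

```python
# Hedged sketch of a leaf-local TileTree lookup.
import numpy as np

def tile_uv(point, normal, leaf_min, leaf_size):
    """Project a point inside a leaf cube onto the face most orthogonal
    to the normal; returns (u, v) in [0, 1]^2."""
    axis = int(np.argmax(np.abs(normal)))       # dominant normal axis
    u_axis, v_axis = [a for a in (0, 1, 2) if a != axis]
    local = (np.asarray(point, dtype=float) - leaf_min) / leaf_size
    return local[u_axis], local[v_axis]

def fetch(tile, u, v):
    h, w = tile.shape[:2]
    return tile[min(int(v * h), h - 1), min(int(u * w), w - 1)]

tile = np.random.rand(16, 16, 3)                # one leaf's color tile
u, v = tile_uv((0.3, 0.7, 0.2), (0.1, 0.9, 0.2), np.zeros(3), 1.0)
print(fetch(tile, u, v))
```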